
    Theory of size-dependent resonance Raman intensities in InP nanocrystals

    The resonance Raman spectrum of InP nanocrystals is characterized by features ascribable to both longitudinal (LO) and transverse (TO) optical modes. The intensity ratio of these modes exhibits a strong size dependence. To calculate the size dependence of the LO and TO Raman cross sections, we combine existing models of Raman scattering, the size dependence of electronic and vibrational structure, and electron-vibration coupling in solids. For nanocrystals with a radius >10 Å, both the LO and TO coupling strengths increase with increasing radius. This, together with an experimentally observed increase in the electronic dephasing rate with decreasing size, allows us to account for the observed ratio of LO/TO Raman intensities.
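
    For context, a generic sum-over-states (A-term) form of the resonance Raman cross section makes the two competing size trends explicit; this is a textbook sketch, not the specific model combination used in the paper. Here omega_I and omega_S are the incident and scattered frequencies, |i>, |n>, |f> the initial, intermediate, and final vibronic states, and Gamma the electronic dephasing rate:

        % Generic A-term resonance Raman cross section (textbook sketch,
        % not the paper's specific model combination):
        \sigma_R(\omega_I) \;\propto\; \omega_I\,\omega_S^{3}
          \left| \sum_{n} \frac{\langle f \mid n \rangle \langle n \mid i \rangle}
                               {E_n - \hbar\omega_I - i\Gamma} \right|^{2}

    In this form, stronger electron-vibration coupling enlarges the Franck-Condon overlaps <n|i> and hence the cross section, while a larger dephasing rate Gamma damps the resonant denominator; the LO/TO intensity ratio tracks the balance of these two size-dependent effects.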

    Germanium quantum dots: Optical properties and synthesis

    Three different size distributions of Ge quantum dots (≳200, 110, and 60 Å) have been synthesized via the ultrasonic mediated reduction of mixtures of chlorogermanes and organochlorogermanes (or organochlorosilanes) by a colloidal sodium/potassium alloy in heptane, followed by annealing in a sealed pressure vessel at 270 °C. The quantum dots are characterized by transmission electron microscopy, x-ray powder diffraction, x-ray photoemission, infrared spectroscopy, and Raman spectroscopy. Colloidal suspensions of these quantum dots were prepared and their extinction spectra measured with ultraviolet/visible (UV/Vis) and near-infrared (IR) spectroscopy in the regime from 0.6 to 5 eV. The optical spectra are correlated with a Mie theory extinction calculation utilizing bulk optical constants. This leads to an assignment of three optical features to the E(1), E(0'), and E(2) direct band gap transitions. The E(0') transitions exhibit a strong size dependence. The near-IR spectra of the largest dots are dominated by E(0) direct gap absorptions. For the smallest dots the near-IR spectrum is dominated by the Γ25→L indirect transitions.
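
    As a concrete illustration of the Mie-extinction step, here is a minimal Python sketch in the small-particle (dipole) limit. The paper uses the full Mie series with tabulated bulk Ge optical constants; the refractive index below is a placeholder value, not data from the paper.

        import numpy as np

        def extinction_dipole_limit(radius_nm, wavelength_nm, n_complex):
            """Extinction efficiency Q_ext of a small sphere in the dipole
            (small size-parameter) limit of Mie theory; valid only for x << 1.
            The paper uses the full Mie series with bulk Ge optical constants."""
            x = 2 * np.pi * radius_nm / wavelength_nm          # size parameter
            lorentz = (n_complex**2 - 1) / (n_complex**2 + 2)  # Clausius-Mossotti factor
            q_abs = 4 * x * lorentz.imag                       # leading absorption term
            q_sca = (8.0 / 3.0) * x**4 * abs(lorentz)**2       # leading scattering term
            return q_abs + q_sca

        # Placeholder optical constant near Ge's E(1) region (~2.1 eV), not paper data:
        print(extinction_dipole_limit(radius_nm=3.0, wavelength_nm=590.0, n_complex=5.0 + 2.0j))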

    Factored Neural Representation for Scene Understanding

    A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be directly constructed from a raw monocular RGB-D video, without requiring specialized hardware setup or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached the setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce a global scene encoding, assume multiview capture with limited or no motion in the scenes, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can be learned directly from a monocular RGB-D video to produce object-level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformations (e.g., nonrigid movement). We evaluate our representation against a set of neural approaches on both synthetic and real data to demonstrate that the representation is efficient, interpretable, and editable (e.g., change object trajectory). The project webpage is available at https://yushiangw.github.io/factorednerf/ and code and data at http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf/.

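    The factoring idea (a latent code per object, a rigid pose per object per frame, and a shared canonical-space decoder) can be sketched as follows. This is a minimal PyTorch illustration under assumed design choices, not the authors' architecture; the names FactoredScene and rodrigues are hypothetical.

        import torch
        import torch.nn as nn

        def rodrigues(v):
            """Axis-angle vector (3,) -> 3x3 rotation matrix (Rodrigues' formula)."""
            theta = v.norm().clamp(min=1e-8)
            k = v / theta
            K = torch.zeros(3, 3)
            K[0, 1], K[0, 2] = -k[2], k[1]
            K[1, 0], K[1, 2] = k[2], -k[0]
            K[2, 0], K[2, 1] = -k[1], k[0]
            return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

        class FactoredScene(nn.Module):
            """One latent code per object plus a per-frame rigid pose; a shared
            MLP decodes density and color in each object's canonical space."""
            def __init__(self, num_objects, num_frames, latent_dim=64):
                super().__init__()
                self.codes = nn.Embedding(num_objects, latent_dim)
                # Per object and frame: 3 axis-angle rotation + 3 translation params.
                self.poses = nn.Parameter(torch.zeros(num_objects, num_frames, 6))
                self.decoder = nn.Sequential(
                    nn.Linear(3 + latent_dim, 128), nn.ReLU(),
                    nn.Linear(128, 4),  # (density, r, g, b)
                )

            def forward(self, x_world, obj_id, frame_id):
                rot = self.poses[obj_id, frame_id, :3]
                trans = self.poses[obj_id, frame_id, 3:]
                # Undo the object's rigid motion: world point -> canonical frame.
                x_canon = rodrigues(-rot) @ (x_world - trans)
                code = self.codes(torch.tensor(obj_id))
                return self.decoder(torch.cat([x_canon, code]))

        model = FactoredScene(num_objects=3, num_frames=100)
        print(model(torch.rand(3), obj_id=0, frame_id=10))  # density + RGB at a query point

    Because the trajectory lives in the explicit pose parameters rather than inside the network weights, editing an object's motion amounts to rewriting rows of the pose tensor, which is the kind of manipulability the abstract highlights.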

    Quantum Error Correcting Codes Using Qudit Graph States

    Graph states are generalized from qubits to collections of n qudits of arbitrary dimension D, and simple graphical methods are used to construct both additive and nonadditive quantum error correcting codes. Codes of distance 2 saturating the quantum Singleton bound for arbitrarily large n and D are constructed using simple graphs, except when n is odd and D is even. Computer searches have produced a number of codes with distances 3 and 4, some previously known and some new. The concept of a stabilizer is extended to general D, and shown to provide a dual representation of an additive graph code.
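
    A minimal sketch of the qudit graph-state construction such codes start from: the stabilizer generator attached to vertex v is X_v times Z^{A[v,w]} on each neighbour w, with exponents taken mod D. The Python helper below (hypothetical, not from the paper) writes these generators in symplectic form.

        import numpy as np

        def graph_state_stabilizers(adjacency, D):
            """Stabilizer generators of a qudit graph state over Z_D (generalized
            Pauli group): for each vertex v, S_v = X_v * prod_w Z_w^{A[v,w]}.
            Returned in symplectic form [x-part | z-part], entries mod D."""
            A = np.asarray(adjacency) % D
            n = A.shape[0]
            x_part = np.eye(n, dtype=int)   # one X on vertex v
            z_part = A                      # Z^{A[v,w]} on each neighbour w
            return np.hstack([x_part, z_part]) % D

        # Example: 3-qutrit (D = 3) triangle graph
        triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
        print(graph_state_stabilizers(triangle, D=3))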

    Col-OSSOS: Colors of the Interstellar Planetesimal 1I/`Oumuamua

    The recent discovery by Pan-STARRS1 of 1I/2017 U1 (`Oumuamua), on an unbound and hyperbolic orbit, offers a rare opportunity to explore the planetary formation processes of other stars, and the effect of the interstellar environment on a planetesimal surface. 1I/`Oumuamua's close encounter with the inner Solar System in 2017 October was a unique chance to make observations matching those used to characterize the small-body populations of our own Solar System. We present near-simultaneous g′, r′, and J photometry and colors of 1I/`Oumuamua from the 8.1-m Frederick C. Gillett Gemini North Telescope, and gri photometry from the 4.2-m William Herschel Telescope. Our g′r′J observations are directly comparable to those from the high-precision Colours of the Outer Solar System Origins Survey (Col-OSSOS), which offer unique diagnostic information for distinguishing between outer Solar System surfaces. The J-band data also provide the highest signal-to-noise measurements made of 1I/`Oumuamua in the near-infrared. Substantial, correlated near-infrared and optical variability is present, with the same trend in both near-infrared and optical. Our observations are consistent with 1I/`Oumuamua rotating with a double-peaked period of 8.10 ± 0.42 hours and being a highly elongated body with an axial ratio of at least 5.3:1, implying that it has significant internal cohesion. The color of the first interstellar planetesimal is at the neutral end of the range of Solar System g−r and r−J solar-reflectance colors: it is like that of some dynamically excited objects in the Kuiper belt and the less-red Jupiter Trojans.
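
    The axial-ratio bound follows from standard lightcurve arithmetic: for an ellipsoid viewed equator-on, the peak-to-peak amplitude is delta_m = 2.5 log10(a/b), so a/b >= 10^(0.4 delta_m). A short Python check (illustrative only; the amplitude below is back-computed from the quoted 5.3:1 ratio, not a measurement reported here):

        import math

        def min_axial_ratio(delta_mag):
            """Lower bound on a/b for a rotating ellipsoid seen equator-on,
            from the peak-to-peak lightcurve amplitude: delta_m = 2.5*log10(a/b)."""
            return 10 ** (0.4 * delta_mag)

        # An axial ratio of 5.3:1 corresponds to an amplitude of about 1.8 mag:
        print(2.5 * math.log10(5.3))    # ~1.81 mag
        print(min_axial_ratio(1.81))    # ~5.3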

    Preparing athletes and teams for the Olympic Games: experiences and lessons learned from the world's best sport psychologists

    As part of an increased effort to understand the most effective ways to psychologically prepare athletes and teams for Olympic competition, a number of sport psychology consultants have offered best-practice insights into working in this context. These individual reports have typically comprised anecdotal reflections on working with particular sports or countries; a more holistic approach is therefore needed so that developing practitioners can access and utilise a comprehensive evidence base. The purpose of this paper is to provide a panel-type article that offers the next generation of aspiring practitioners lessons and advice on preparing athletes and teams for the Olympic Games, drawn from some of the world's most recognised and experienced sport psychologists. The sample comprised 15 sport psychology practitioners who, collectively, have accumulated over 200 years of first-hand experience preparing athletes and/or teams from a range of nations for six summer and five winter Olympic Games. Interviews with the participants revealed 28 main themes and 5 categories: Olympic stressors, success and failure lessons, top tips for neophyte practitioners, differences within one's own consulting work, and multidisciplinary consulting. It is hoped that the findings of this study can help the next generation of sport psychologists better face the realities of Olympic consultancy and plan their own professional development so that, ultimately, their aspirations to be the world's best can become a reality.

    Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences

    Multi-object tracking from RGB-D video sequences is a challenging problem due to the combination of changing viewpoints, motion, and occlusions over time. Our key insight is that inferring the complete geometry of objects significantly helps in tracking them, so we propose to jointly infer the complete geometry of rigidly moving objects while tracking them over time. By hallucinating unseen regions of objects, we can obtain additional correspondences between views of the same instance, providing robust tracking even under strong changes of appearance. From a sequence of RGB-D frames, we detect objects in each frame and learn to predict their complete object geometry as well as a dense correspondence mapping into a canonical space. This allows us to derive 6DoF poses for the objects in each frame, along with their correspondences between frames, providing robust object tracking across the RGB-D sequence. Experiments on both synthetic and real-world RGB-D data demonstrate that we achieve state-of-the-art performance on dynamic object tracking, and that our object completion significantly helps tracking, providing an improvement of 6.5% in mean MOTA.
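
    One way to make the pose-derivation step concrete: given predicted dense correspondences from observed points into canonical space, a 6DoF pose per frame can be solved in closed form with the Kabsch algorithm. The sketch below is an assumed least-squares formulation of that step, not the paper's learned pipeline.

        import numpy as np

        def pose_from_correspondences(canonical_pts, observed_pts):
            """Closed-form least-squares 6DoF pose (R, t) mapping canonical-space
            points onto their observed counterparts (Kabsch algorithm)."""
            mu_c = canonical_pts.mean(axis=0)
            mu_o = observed_pts.mean(axis=0)
            H = (canonical_pts - mu_c).T @ (observed_pts - mu_o)  # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ S @ U.T
            return R, mu_o - R @ mu_c

        # Quick check: recover a known rigid motion from noiseless correspondences.
        rng = np.random.default_rng(0)
        P = rng.normal(size=(50, 3))
        theta = np.pi / 4
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        R_est, t_est = pose_from_correspondences(P, P @ R_true.T + np.array([1.0, 2.0, 3.0]))
        print(np.allclose(R_est, R_true))  # True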